Although speech recognition systems are widely used and their accuracy continues to improve, a considerable performance gap remains between their accuracy and human recognition ability. This is partially due to high speaker variability in the speech signal. Deep neural networks are among the best tools for acoustic modeling. Recently, hybrid deep neural network / hidden Markov model (DNN-HMM) systems have led to considerable performance gains in speech recognition, because deep networks model complex correlations between features. The main aim of this paper is to achieve better acoustic modeling by changing the structure of the deep convolutional neural network (CNN) so that it adapts to speaking variations; in this way, existing models and the corresponding inference task are improved and extended. We propose the adaptive windows convolutional neural network (AWCNN) to analyze joint temporal-spectral feature variations. AWCNN changes the structure of the CNN and estimates the probabilities of HMM states. The adaptive windows make the model more robust to speech signal variations, both within a single speaker and across speakers. AWCNN is applied to the speech spectrogram and models time-frequency variations; it handles speaker variability, speech signal variations, and variations in phone duration. Results on the FARSDAT and TIMIT datasets show that, for the phone recognition task, the proposed structure achieves absolute error reductions of 1.2% and 1.1%, respectively, over CNN models, which is a considerable improvement for this problem. Based on the experimental results, we conclude that using speaker information is highly beneficial for recognition accuracy.